Patent Abstract:
MANAGEMENT OF REPLICATED VIRTUAL STORAGE AT RECOVERY SITES. The present invention relates to techniques involving storage replication. A representative technique includes apparatus and methods for receiving replicated virtual storage of a replicated virtual machine, which includes at least one replicated base virtual disk that substantially matches a primary base virtual disk to be replicated. Copies of differencing disks or other forms of virtual storage updates are received at a recovery location, where each of the differencing disks is associated with the primary base virtual disk as a descendant of it. The received copies of the differencing disks are arranged relative to the replicated base virtual disk in a manner matching the way the differencing disks were arranged relative to the primary base virtual disk, thus keeping the replicated virtual machine's data view in sync with the virtual machine at the primary location.
Publication number: BR112013032923B1
Application number: R112013032923-8
Filing date: 2012-06-13
Publication date: 2021-08-24
Inventors: Phani Chiruvolu; Gaurav Sinha; Devdeep Singh; Jacob Oshins; Christopher L. Eck
Applicant: Microsoft Technology Licensing, LLC
IPC main classification:
Patent Description:

BACKGROUND
[0001] Given the computing needs of businesses and individuals, uninterrupted computing service has become vital. Many organizations develop business continuity plans to ensure that critical business functions will enjoy continuous operation and remain available despite machine malfunctions, power outages, natural disasters, and other disruptions that can compromise normal operations.
[0002] Local outages can be caused, for example, by hardware or other failures on local servers, software or firmware issues that result in system hangs and/or reboots, and the like. On-premises solutions can include server clustering and virtualization techniques that make it easier to overcome failures. Local failover techniques using virtualization provide the ability to continue operation on a different machine or virtual machine if the original machine or virtual machine fails. Software may recognize that an operating system and/or application is no longer working, and another instance of the operating system and application(s) may be launched on another machine or virtual machine to pick up where the previous one left off. For example, a hypervisor can be configured to determine that an operating system is no longer working, or application management software can determine that an application is no longer working, which can, in turn, notify a hypervisor or operating system that an application is no longer working. Highly available solutions can configure failover to occur, for example, from one machine to another at a common location or, as described below, from one location to another. Other failover configurations are also possible for other purposes such as testing, where failover can further be enabled from one virtual machine to another virtual machine within the same machine.
[0003] Disaster recovery refers to maintaining business continuity on a larger scale. Certain failure scenarios impact more than one operating system, virtual machine, or physical machine. Higher-level malfunctions include power failures or other problems that affect an entire location, such as a business's information technology (IT) or other computing center. Natural and other disasters can impact a business and can cause some, and typically all, computing systems at the location to fail. To provide disaster recovery, businesses today can back up a functioning system to tape or other physical media and mail or otherwise deliver it to another location. When a data center goes offline for any reason, the backup data center can take over operations using the backup media. Among other disadvantages, the process of provisioning physical media is laborious, backups have significant time gaps between them, and recovery systems can be days out of date.
SUMMARY
[0004] Techniques involving storage replication, including virtual storage associated with virtual machines, are described. A representative technique includes an apparatus that can receive replicated virtual storage of a replicated virtual machine, which includes at least one replicated base virtual disk that substantially corresponds to a primary base virtual disk to be replicated. A receiver receives copies of differencing disks or other forms of virtual storage updates, each of which is associated with the primary base virtual disk as a descendant of it. A replication management module is configured to arrange the incoming copies of differencing disks relative to the replicated base virtual disk in a manner corresponding to how the differencing disks were arranged relative to the primary base virtual disk.
[0005] A particular implementation of such a technique involves copies of differencing disks that are of a plurality of replication or "copy" types, where the replication management module is configured to arrange the plurality of replication types relative to the replicated base virtual disk as they are arranged relative to the primary base virtual disk. Examples of the plurality of types include copies of differencing disks that were obtained after one or more applications operating on the virtual machine had prepared themselves for the copy, and copies of differencing disks that were obtained without notice of or preparation for the copy.
[0006] In another representative embodiment, a computer-implemented method to facilitate virtual storage replication is provided. A base virtual disk image of a virtual disk that is associated with a virtual machine is stored. Changes to the virtual disk are stored by recording the changes to a current read/write differencing disk at the top of a disk chain that includes the base virtual disk image and any intervening differencing disks. On a regular or irregular basis, transferable copies of the changes to the virtual disk are created for replicated storage by copying the current read/write differencing disk and disallowing additional changes to it, creating a new current differencing disk at the top of the chain, and transferring the copies of the differencing disks to the replicated storage.
[0007] In another representative embodiment, one or more computer-readable media are provided that have instructions stored thereon that are executable by a computer system to perform various functions. The functions include creating a chain of read-only snapshots of a virtual machine's differencing disk, with a new differencing disk being created with each snapshot to provide read and write capability at the end of the chain. A plurality of snapshot types are included in the chain, including at least an application-consistent snapshot type and a crash-consistent snapshot type. A replicated chain of read-only snapshots is created that matches the chain of read-only snapshots of the virtual machine's differencing disk. Selection of one of the read-only snapshots in the replicated chain as a restore point from which to boot a replicated virtual machine is facilitated. The replicated virtual machine is booted from the selected one of the read-only snapshots, together with the read-only snapshots that precede the selected restore point in the chain.
[0008] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 generally illustrates a representative embodiment for replicating virtual machines with the use of differencing disks; Figures 2A and 2B represent representative computing environments in which replication in accordance with the disclosure can be implemented; Figure 3 illustrates a representative way in which a primary computer/server environment can facilitate its disaster recovery and have its data replicated as it changes over time; Figures 4A and 4B represent successive states of a representative disk chain as virtual storage changes and one or more copies are made; Figure 5 is a flowchart illustrating a representative way in which the data view of a replicated virtual machine is kept in sync with its counterpart on the primary server; Figure 6 represents a replicated virtual disk and a replicated disk chain that correspond to the preserved portion of virtual storage at the primary location being replicated; Figures 7A to 7F illustrate a representative example of asynchronously replicating the storage of a virtual machine or other computing entity from a first computing environment to at least one other computing environment; Figure 8 illustrates an example of scheduling snapshots or other copies of a differencing or base disk; Figure 9 illustrates an example of the linking, and modification of the links, of replicated differencing disks when accessing the disk chain; Figure 10 is a flowchart illustrating representative features from the perspective of at least one primary server at the primary site whose storage is to be replicated; Figure 11 is a flowchart illustrating representative features from the perspective of at least one recovery server at the recovery site that is replicating the virtual machine(s); and Figure 12 represents a representative computing system in which the principles described in this document can be implemented.
DETAILED DESCRIPTION
[00010] In the following description, reference is made to the attached drawings, which depict representative implementation examples. It should be understood that other embodiments and implementations can be used, as structural and/or operational changes can be made without departing from the scope of the disclosure.
[00011] The disclosure is generally directed to data replication and recovery. While the principles described in this document are applicable to any replication of data from one device or data storage facility to another device or facility, numerous embodiments in this disclosure are described in the context of disaster recovery, where replicated data and processing resources are provided off-site from the primary computing center. It should be recognized, however, that the principles described in this document apply regardless of the distance or manner in which replicated data is transferred to the recovery site(s). Certain embodiments are also described in the context of virtual machines, although the principles are equally applicable to physical machines and their available storage.
[00012] As noted above, system backup information is typically backed up to physical media and physically provided to a remote recovery location. When a data center goes offline, the backup data center can take over operations with the backup media. Repeatedly providing physical media to the recovery site is inconvenient. Recovery could involve using backup data already available at the recovery site, which could be a day or more old, or recovery might have to wait until more recent replication data arrives at the recovery site. These solutions do not provide a high degree of business continuity.
[00013] A complete backup of the data at a primary location could be obtained and delivered electronically to a recovery location. However, the size of the data transmissions could be very large, making the regular preparation and eventual transmission of such information unmanageable. Providing replicated data on an infrequent basis to alleviate these transmission issues can result in less desirable recovery when needed, as changed data can be lost in the long intervals between backups. Building disaster recovery strategies on backups in such ways leads to complex processes with very high recovery point objectives (RPO) and recovery time objectives (RTO).
[00014] The disaster recovery process can be simplified by having a replicated copy of a machine's storage, or of a virtual machine, at a location different from where the primary machine(s) are operating. As used herein, unless otherwise noted, a "copy" generally refers to a replication of the virtual machine or virtual machine storage under discussion. Thus, "replication" and "copy" may be used interchangeably in this document. Updates can be made from the primary server to the replicated copy of the virtual machine or storage. Replication of a virtual machine differs from backing up an application or operating system stack in that replicating a virtual machine involves replicating both the storage and the virtual machine configuration, so the workload arrives at the recovery site in a condition that does not require reconfiguration. For example, the virtual machine container will already have the correct number of network interfaces and other such settings, configured the way the workload expects.
[00015] The disclosure provides mechanisms and methods that enable the replication of data associated with a virtual or physical machine. For example, in the context of virtual machines, the disclosure provides ways to enable the replication of data from one or more virtual machines, which can be held in the form of virtual disks or other similar files. Among other things, the disclosure also addresses mechanisms and methods that enable recovery-site users to utilize a replicated copy of the virtual machine in the event that a disaster or other occurrence impacts the primary site's ability to run normally.
[00016] Various embodiments below are described in terms of virtual machines. Virtualization in general refers to an abstraction of physical resources, which can be used in both client and server scenarios. Hardware emulation involves the use of software that represents the hardware with which the operating system would typically interact. Hardware emulation software can support guest operating systems, and virtualization software such as a hypervisor can establish a virtual machine (VM) on which a guest operating system operates. Much of the description in this document is presented in the context of virtual machines, but the principles are equally applicable to physical machines that do not employ virtualization.
[00017] Figure 1 generally illustrates a representative embodiment for replicating virtual machines using differencing disks. A first site 100 may include one or more host computing systems 102 to 104, which may host one or more virtual machines (VM) 106. The computing system 102 has associated storage and, in the example of Figure 1, the virtual machine 106 has associated virtual storage (VS) 108. Virtual storage can represent, for example, a virtual hard disk, which in general represents logical storage typically provided as a disk image file(s). Virtual machine 106 perceives virtual storage 108 as its hard disk or other similar storage device.
[00018] In one embodiment, replicating data or other information stored in virtual storage 108 includes the use of a storage state chain or tree, where the top of the chain (also referred to herein as the tip of the tree) provides read and write capability to record changes written to virtual storage. For example, virtual storage 108 can represent a virtual hard disk that has a virtual hard disk (VHD) file format. The storage tree may include a base virtual disk 110 and one or more differencing disks 112A through 112n that are associated with base virtual disk 110. Differencing disk 112A, which is the child of base virtual disk 110, captures changes to the virtual storage 108. As described more fully below, a differencing disk, such as differencing disk 112A, can be preserved through write protection, and a new differencing disk, such as differencing disk 112B, can be created to accept changes to virtual storage 108 from that point forward. This can continue for any number of differencing disks 112n, thus creating a chain of preserved virtual disks and a differencing disk 112n to capture new changes.
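By way of an illustrative sketch only (not the patented implementation; names such as VirtualDisk and DiskChain are expository assumptions), the chain-of-disks structure described above might be modeled in Python as follows:

from dataclasses import dataclass, field

@dataclass
class VirtualDisk:
    name: str
    parent: "VirtualDisk | None" = None  # link/indicator to the parent disk in the chain
    read_only: bool = False
    blocks: dict = field(default_factory=dict)  # sparse block store: block number -> bytes

class DiskChain:
    def __init__(self, base_name: str):
        # The base virtual disk is preserved read-only; a child differencing
        # disk at the tip of the tree accepts all new writes.
        self.base = VirtualDisk(base_name, read_only=True)
        self.tip = VirtualDisk(base_name + "-diff1", parent=self.base)

    def write(self, block: int, data: bytes) -> None:
        # All new writes land on the read/write disk at the tip of the chain.
        assert not self.tip.read_only
        self.tip.blocks[block] = data

    def freeze_and_extend(self) -> VirtualDisk:
        # Preserve the current tip through write protection and create a new
        # child differencing disk as the new tip, as described for disks
        # 112A/112B above.
        frozen = self.tip
        frozen.read_only = True
        self.tip = VirtualDisk(frozen.name + "+", parent=frozen)
        return frozen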
[00019] At least a second site 150 is provided, including one or more host computing systems 152, 153, 154, where information replicated from the first site 100 can be received and stored, and where recovery computing operations can be initiated in the event of a disaster or other event that renders the first site 100 unable to continue its computing responsibilities. The first location 100 and the second location 150 communicate via communication links 130, which may involve any type of electronic communication interface such as direct cabling, wired networks, wireless networks and the like, and any combination thereof. A replication of the virtual machine 106 may be provided to the second site 150 through electronic means or otherwise to provide the replicated virtual machine 156. Similarly, differencing disks 112A through 112n or other portions of virtual storage 108 that are designed to capture changes to the virtual storage 108 can be transferred once the data has been protected from additional write operations, as described more fully below. The replicated virtual storage 158 therefore matches what was transferred from virtual storage 108 at primary site 100.
[00020] Storage, such as virtual storage 108 at the first location 100, could stream its data to the second location asynchronously. However, in such an arrangement, if the first location 100 fails, it will be difficult for the second location 150 to know what has been successfully transferred and whether the storage is coherent. The present disclosure describes that snapshots (or other still images) of differencing disks 112A through 112n from the first site 100 are created and transferred to the second site 150. Using a snapshot creation feature enables asynchronous replication of the storage 108 of a virtual machine 106 from one place to another. That way, if a primary server at the first site 100 fails or otherwise becomes disconnected, there will be no discrepancy between the snapshots and the data that has been replicated. Consequently, it will be known which data has been received at the second location 150. The disclosure thus contemplates the first location 100 transferring snapshots or other images of differencing disks at particular times to the second location 150.
[00021] For example, obtaining a replication of virtual storage 108 may generally involve transferring the differencing disk data to the second location 150 and creating a new differencing disk. As a more particular example, a snapshot 114 or other replication/copy may be taken of differencing disk 112B to provide an image (e.g., an AVHD image file) to host computing system 152 at the second site 150. In one embodiment, differencing disk 112B, from which snapshot 114 was taken, will be changed to read-only, and a new differencing disk 112n will be created as a read/write virtual storage file.
[00022] Some embodiments involve different types of copies of differencing disks or other virtual storage images. Figure 1 depicts a plurality of such different "copy" or replication types, including copy type A 116 and copy type B 118 through copy type n 120. For example, a first copy type, such as copy type B 118, may represent a low-impact copy/snapshot of a differencing disk 112A to 112n that was obtained without significant efforts to increase data coherence. One way to obtain such a copy is to mark the particular differencing disk read-only at any desired time and create a new differencing disk to capture subsequently written data. For example, a virtual machine snapshot can be obtained using, for example, virtualization software, a hypervisor, operating system functionality, etc., which can capture the state, data, and hardware configuration of a running virtual machine. This type of copy, or another similar low-impact copy, may be referred to in this disclosure as a crash-consistent copy, as what is stored on the differencing disk generally matches what could be on the disk after a system crash or power outage. In these cases, applications may be running that temporarily hold data in intermediate buffers or memory that has not yet been committed to disk. File system metadata may not all have made it to disk before the disk is marked read-only. With this type of copy, it is possible that an attempt to revive the copy at a recovery location will not go entirely well, as the data may not be completely coherent. However, this type of copy does not cause running programs to be interrupted, and therefore has very low cost in terms of the system performance of the computing systems 102 to 104 at the first location 100.
[00023] Another copy type, such as copy type A 116, may represent a higher-coherence copy/snapshot of a differencing disk 112A to 112n that was obtained with some effort to increase data coherence before snapshot 114 was taken. For example, such a snapshot 114 can be taken using an operating system service such as the Volume Shadow Copy Service (VSS) by MICROSOFT® Corporation, which coordinates between the backup functionality and the user applications that update the data on disk. Running software (i.e., data writers) can be notified of an impending backup and bring their files into a consistent state. This copy type provides a higher probability of proper revival at the second site 150. However, because running applications may need to prepare for the backup by flushing I/O, saving state, etc., the running workload is interrupted and subjected to latencies and lower throughput. Different copy types can be used at different times or on different schedules to provide a desired balance between workload interruption and data consistency.
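Continuing the illustrative Python sketch above, the two copy types might be distinguished as follows; CopyType and notify_writers_and_flush are expository assumptions, with only VSS being named in the disclosure:

from enum import Enum

class CopyType(Enum):
    CRASH_CONSISTENT = "crash-consistent"               # low impact, no preparation
    APPLICATION_CONSISTENT = "application-consistent"   # writers prepared first

def notify_writers_and_flush() -> None:
    # Hypothetical stand-in for a coordinator such as VSS notifying data
    # writers to flush and quiesce so their files reach a consistent state.
    pass

def take_copy(chain: "DiskChain", copy_type: CopyType) -> "VirtualDisk":
    if copy_type is CopyType.APPLICATION_CONSISTENT:
        notify_writers_and_flush()  # prepare running applications for the copy
    # Either way, the current tip is frozen (write-protected) and a new
    # read/write differencing disk is created at the tip of the tree.
    return chain.freeze_and_extend()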
[00024] As described above, the disclosure presents ways in which stored data associated with a physical machine(s) or virtual machine(s) is replicated from a first site 100 to at least a second site 150. The embodiments involve providing snapshots or other copies of disk image portions, such as differencing disks, while allowing multiple types of copies to be obtained, to regularly provide replicated data at a second or "recovery" location while keeping processing interruptions at the first or "primary" location at a manageable level. Snapshots transferred to the second location 150 can be chained together analogously to the chain at the primary location 100. In addition, the servers at both the first and second locations 100, 150 can facilitate the merging of write-protected differencing disks into their respective parent disks, to reduce storage capacity requirements, reduce access latencies, etc. As described more fully below, the differencing disks transferred by the first site 100 are received at the second site 150 and chained onto the top of the existing replicated virtual machine disk chain, thus keeping the replicated virtual machine's data view synchronized with that of the primary server(s).
[00025] Figures 2A and 2B represent representative computing environments in which replication in accordance with the disclosure can be implemented. The representative systems of Figures 2A and 2B are merely examples and clearly do not represent exclusive arrangements. The computing environment in Figure 2A illustrates a first location, such as a primary server location 200. In this example, the primary server location 200 includes one or more servers 202A through 202n or other computing devices. Each of the servers 202A through 202n may respectively include computing capabilities such as one or more physical or logical processors 204A, 204n, memory 206A, 206n, storage 208A, 208n, etc. Storage 208A, 208n can be replicated so that storage copies 210A, 210n, such as storage snapshots, can be provided to a recovery location 212 for disaster recovery purposes. Figure 2A illustrates that the techniques described in this document are applicable to any storage associated with a processor, as well as to virtual storage and virtual machines. It should be understood that the recovery location 212 may include servers or other computing devices that have similar processing, memory, and storage capabilities.
[00026] Figure 2B illustrates an example involving one or more virtual machines. In that example, the primary server location 220 and recovery location 250 respectively include one or more servers 222A through 222n, each of which may include computing capabilities such as one or more physical or logical processors 224A, 224n, memory 226A, 226n, storage 228A, 228n, etc. One or more of the servers may include a hypervisor 230A, 230n or other virtual machine management module that presents a virtual operating platform on which operating systems 232A, 232n and virtual machines 234A through 236A, 234n through 236n can operate. Features of the hypervisors 230A, 230n and/or operating systems 232A, 232n can be used, adapted, or added to provide functionality such as the replication management modules (RMM) 238A, 238n. In accordance with the present disclosure, replication management modules 238A, 238n can provide functionality such as recording which changes (e.g., which differencing disk) were the last changes to be transferred from primary location 220 to recovery location 250, requesting that copies be made in response to schedules or other event triggers, preparing information for transfer to recovery location 250, merging differencing disks into their respective parent disks, etc. Virtual storage (not shown) is associated with each virtual machine and can be stored in files in the memory 226A, 226n of servers 222A, 222n, in local storage 228A, 228n, in clustered storage (not shown) if servers 222A, 222n are configured in a cluster, etc. Virtual storage can be replicated such that storage snapshots or other copies 242A, 242n are provided to a recovery site(s) 250 for disaster recovery or other purposes. Figure 2B therefore illustrates that the techniques described in this document are applicable to virtual storage associated with a virtual machine. It should be understood that recovery site 250 may include servers or other computing devices that have processing, virtual machine management, and virtual machine capabilities analogous to those described in Figures 2A and/or 2B.
[00027] Figure 3 illustrates a representative way in which a primary computer/server environment can facilitate its disaster recovery and have its data replicated as it changes over time. Figure 4A represents a first state of a disk chain 404A and Figure 4B represents a second state of the disk chain 404B. In the following example, Figures 3, 4A and 4B are referred to collectively.
[00028] A base virtual disk image 406 or other initial storage baseline of a virtual disk 402 of virtual machine 400 is stored, as represented in block 300. As described further below, this base virtual disk image 406 can serve as the basis for replicating virtual storage to a recovery site. The base virtual disk image 406 can be provided as a file, such as, for example, a virtual hard disk (VHD) file.
[00029] As shown in block 302, changes to virtual disk 402 can be recorded on a current differencing disk 410A (Figure 4A), which can be both written to and read. In one embodiment, the current differencing disk 410A is logically on top of a disk chain 404A that includes the base virtual disk image 406 and any intermediate differencing disks 408. If the current differencing disk 410A is the first child differencing disk of the parent base virtual disk 406, then there are no intermediate differencing disks. Furthermore, if the intermediate differencing disks 408 have already been merged into the base virtual disk image 406, then there will be no intermediate differencing disks.
[00030] On the other hand, there may be differencing disks 408 that have been preserved as read-only, such as when a snapshot or other copy of a differencing disk is to be preserved. So that the snapshot data (which can be transferred for replication) matches the differencing disk, the differencing disk can be write-protected in connection with the snapshot. In these cases, there might be one or more read-only differencing disks 408 between the base virtual disk image 406 and the current read/write differencing disk 410A. As described in more detail below, the chain of at least the read-only disks at the primary site will be replicated to a recovery location(s), thus keeping the replicated virtual machine's data view synchronized with the corresponding server(s) at the primary location.
[00031] At some point, a copy of the current read/write differencing disk 410A will be created and write-protected, as shown in block 304. For example, a request 414 for a copy of the current differencing disk 410A can be made by a processor-executable replication management module 412, which may be a service or feature of a hypervisor, host operating system, parent partition operating system, etc. As noted above, the type of copy to be made can also be noted, such as a crash-consistent copy where the application data may not have been prepared for the copy. Another example is an application-consistent copy, which has a higher probability of appropriate subsequent revival and which might involve, for example, some notification to an application(s) to flush data/logs and otherwise prepare for the snapshot or other copy.
[00032] The differencing disk that was at the top of the chain has been marked read-only, as represented by the R/O differencing disk 410B in Figure 4B. With that disk 410B having been write-protected, block 306 shows that a new differencing disk 420 can be created as the new top of disk chain 404B to replace the differencing disk 410B that was just copied. This new "tree tip" differencing disk 420 will take over the responsibilities of handling both read and write operations. In one embodiment, any unmerged intermediate differencing disks 408, 410B and the base virtual disk 406 below them will remain read-only.
[00033] In one embodiment, the read-only base virtual disk 406 and any read-only intermediate differencing disks 408, 410B have been moved to the recovery location, where the disk chain will be recreated for recovery purposes. This is noted at block 308, where the differencing disk 410B that was just copied and write-protected can be moved to replicated storage, such as by being transferred to a replicated virtual machine address. Thus, when copy 416 is obtained, it can be transferred to a recovery location by a transmitter, transceiver, network interface, and/or other mechanism represented by transmission device 418. As long as the primary location is operational and more copies are to be replicated, as determined at decision block 310, the process of creating copies and write-protecting 304, creating new differencing disks 306, and transferring snapshots 308 can continue. For example, a replication policy or replication rules can be established to determine when a copy should be made and, in the case of multiple copy types, what type of copy should be made.
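Under the assumptions of the earlier sketches (DiskChain, CopyType, take_copy), the primary-side cycle of blocks 304 through 310 might be sketched as follows; transfer_to_recovery is a hypothetical stand-in for the transmission device 418:

def transfer_to_recovery(disk: "VirtualDisk") -> None:
    # Placeholder for the transmitter/transceiver/network interface 418.
    print(f"transferring {disk.name} to the recovery location")

def replication_cycle(chain: "DiskChain", requested_copies: list) -> None:
    for copy_type in requested_copies:        # block 310: more copies to replicate?
        frozen = take_copy(chain, copy_type)  # blocks 304/306: write-protect the tip
                                              # and create a new read/write tip
        transfer_to_recovery(frozen)          # block 308: ship the preserved copy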
[00034] Figure 5 is a flowchart that illustrates a representative way in which the data view of a replicated virtual machine is kept in sync with its counterpart on the primary server. Figure 6 represents a replicated virtual disk 602 and a replicated disk chain 610 that correspond to the preserved portion of virtual storage at the primary site being replicated. In the following example, Figures 5 and 6 are referred to collectively.
[00035] As shown in block 500, a replicated virtual machine 600 is provided at a recovery location, where the replicated virtual machine 600 substantially matches the primary virtual machine that is to be replicated. This can be communicated electronically or delivered by other means. A replicated virtual disk 602 or other replicated virtual storage is provided, as shown in block 502. The replicated virtual disk 602 may include a replicated base virtual disk 604 substantially corresponding to a primary base virtual disk to be replicated (e.g., base virtual disk 406 of Figures 4A and 4B).
[00036] A copy of a differencing disk 606 that is associated with the primary base virtual disk, such as by being the furthest child or descendant of the primary base virtual disk, is received, as shown in block 504. In one embodiment, a received copy is of one of a plurality of possible copy or replication types. As shown in block 506, the received copy of differencing disk 606 is arranged relative to the replicated base virtual disk 604 in the way it was arranged relative to the primary base virtual disk at the primary site. For example, if one or more intermediate differencing disks exist at the primary location, then copies/snapshots of those differencing disks 608 will be received and arranged on the replicated disk chain 610 as the preserved differencing disks are arranged on the primary disk chain (for example, disk chain 404A/B of Figures 4A, 4B). Although one or more of the intermediate differencing disks 608 can be merged into the base virtual disk 604 at the primary and/or recovery servers, the contents must remain in sync with the disk chain at the primary location.
[00037] If other differencing disks are received at the recovery location, as determined at decision block 508, more differencing disks can be received 504 and arranged 506 to remain in sync with the primary location. Such replicated differencing disks, e.g., snapshots or other copies, may be received by a receiver 612, which may represent a discrete receiver, transceiver, network interface, or any receiving mechanism. A replication management module 614 may be provided as a processor-executable module on a recovery server(s), such as in a server hypervisor, host operating system, parent partition operating system, etc. The replication management module 614 can perform tasks such as retaining information regarding what was the last change set (e.g., differencing disk) to be received from the primary location, retaining the copy type (e.g., crash-consistent, application-consistent, etc.), determining which of a plurality of replicated differencing disks to begin processing from if recovery operations are initiated, arranging the snapshots or other copies in a chain corresponding to that of the primary location, and other functions described in this document.
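As a sketch of blocks 504 through 508, again under the assumptions of the earlier Python fragments (ReplicaChain is an expository name), the recovery side might chain each received copy onto the top of the replica's existing chain, preserving the primary's ordering:

class ReplicaChain:
    def __init__(self, replicated_base: "VirtualDisk"):
        self.disks = [replicated_base]  # the replicated base virtual disk (e.g., 604)

    def receive(self, copy: "VirtualDisk", copy_type: "CopyType") -> None:
        copy.parent = self.disks[-1]  # block 506: arrange relative to the chain
                                      # exactly as at the primary location
        copy.read_only = True         # replicas of preserved disks stay frozen
        copy.copy_type = copy_type    # retained so restore points can be chosen later
        self.disks.append(copy)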
[00038] As seen in the previous examples, the solutions provided in the present disclosure can use snapshot or other replication features to create differencing disks at periodic intervals or otherwise. For example, hypervisors, other virtualization software, and/or operating systems may include a snapshot feature that can be used to create differencing disks. In one embodiment, differencing disks at the top of the disk chain accumulate changes while read-only virtual hard disks further down the chain are transferred to the remote location, such as over a network. At the remote site, received differencing disks can be chained on top of the existing disk chain of the replicated copy of the virtual machine by creating and modifying snapshots, thus keeping the replicated virtual machine's data view in sync with the primary server.
[00039] With the use of such features, running virtual machine replication can be provided that structurally ensures the accuracy of the replicated data. For example, using differencing disks to accumulate writes on the primary server and chaining the same disks onto the replicated virtual machine at the remote site ensures that no writes will be lost by the replication process, even in the event of power failures or system crashes at either end. This mechanism provides consistency of replicated data without requiring a resynchronization mechanism or consistency check mechanism.
[00040] The solutions described in this document enable the creation of a copy of a running virtual machine and periodic synchronization of its data from a primary location to a recovery location in a non-disruptive manner. For example, a mechanism is provided to create a copy of a virtual machine running at the primary site on a remote server by transferring the virtual machine configuration and data disks over the network. Such a mechanism can allow the creation of differencing disks at the top of the virtual machine's hard disk chain, transferring the underlying read-only differencing disks over the network to the remote server, and chaining those disks onto the virtual machine on the remote server. Creating differencing disks allows the data transfer to take place without disruption to the running virtual machine.
[00041] Application-consistent points in time can be generated for the replicated copy of the virtual machine. For example, snapshots (e.g., VSS snapshots) can be used, which allow the applications inside the virtual machine to flush and quiesce their writes so that the data up to that point provides a higher guarantee of recoverability. One example methodology uses VSS snapshots to enable recovery from such higher-guarantee restore points on the recovery server by rolling back to the VSS snapshots.
[00042] The solutions described in this document provide the ability to produce a replicated virtual machine with data from any of a plurality of points in time, and to subsequently change the point in time if desired. For example, the mechanism creates and modifies snapshots to chain differencing disks received from the primary server. The snapshots represent points in time of the replicated virtual machine data. The method provides a mechanism to produce the replicated virtual machine by creating a differencing disk for a chosen point in time and using it to produce the virtual machine. These recovery-site differencing disks capture any writes generated by the replicated virtual machine. If the user subsequently chooses to change the point in time, the differencing disk can be discarded in favor of a new differencing disk that is created relative to the new point in time.
[00043] These methodologies further allow replication to continue while running a "test" on the replicated copy of the virtual machine, without making a copy of the replicated virtual machine's virtual hard disks. For example, the method provides the possibility of generating a "test" copy of the replicated virtual machine using two (or more) sets of differencing disks that point to the same parent virtual hard disks. Writes performed by the "test" virtual machine are captured on one set of differencing disks, which is discarded when the test completes. Periodic synchronization changes or "deltas" arriving from the primary server can be collected on the other set of differencing disks, which can be merged into the parent once testing is complete.
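A minimal sketch of this test arrangement, assuming the ReplicaChain/VirtualDisk fragments above (the disk name is illustrative):

def start_test(replica: "ReplicaChain") -> "VirtualDisk":
    # Two futures share the same read-only parent: the test disk below, and
    # the differencing disk on which replication deltas keep accumulating.
    shared_parent = replica.disks[-1]
    return VirtualDisk("test-writes", parent=shared_parent)  # discarded after the test

# While the test runs, deltas from the primary continue to arrive via
# replica.receive(...); when testing completes, the test disk is dropped and
# the delta disk can be merged into the shared parent.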
[00044] The solution also provides the possibility of continuing replication while the initial replica for the virtual machine (e.g., the base virtual disk) is being transported out of band. The mechanism provides support for transporting the initial virtual machine replica "out of band", that is, outside the network transport channel used to transport data from the primary location to the remote location. A differencing disk can be created at the remote site that points to (or "looks at") an empty virtual hard disk, where subsequent differencing disks received from the primary server during replication are chained on top of the created differencing disk. When the out-of-band replica is received at the remote site, the differencing disk that was created to point to the empty virtual hard disk can be "reparented" to point to the virtual hard disks in the initial replica.
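Sketched with the same illustrative types (the function and parameter names are assumptions), this out-of-band seeding reduces to a single pointer change:

def reparent_after_out_of_band_delivery(placeholder_child: "VirtualDisk",
                                        initial_replica: "VirtualDisk") -> None:
    # placeholder_child was created over an empty virtual hard disk; once the
    # out-of-band initial replica arrives, its parent link is simply redirected
    # to the delivered base disk, leaving the chained deltas above it intact.
    placeholder_child.parent = initial_replica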
[00045] An example illustration of many of these points is now provided, presenting a representative example of a sequence of replication events according to the disclosure. Figures 7A to 7F illustrate a representative example of asynchronously replicating the storage of a virtual machine or other computing entity from a first computing environment to at least one other computing environment. Where appropriate, like reference numbers are used throughout Figures 7A through 7F to identify like items.
[00046] In this example, a primary computing site 700 represents the first computing environment and a second computing or "recovery" site 750 represents the second computing environment. Primary location 700 includes one or more operating computing devices (e.g., servers), as does recovery location 750. Recovery location 750 represents one or more computing devices/servers that can receive virtual disks or other storage files for preservation and possible revival in the event of a disaster or other event that impacts the primary site 700's ability to perform its services.
[00047] While the present disclosure is applicable to tracking the replication of storage of any data device or structure used by a computing system, one embodiment involves tracking and replicating changes to a virtual disk used by a hypervisor-based virtualization system. In such a system, to track and replicate changes to the virtual disk used by a virtual machine, a differencing disk can be used for a running virtual machine. When a virtual machine is configured for tracking, a base virtual disk 702 associated with the primary site 700 computing device(s) will be transferred or otherwise provided to the computing device(s) at the recovery site 750. This is represented by the replicated base virtual disk 752 at recovery location 750.
[00048] When base virtual disk 702 has been write-protected and copied to recovery location 750, a first differencing disk D1 704 is created to capture any new writes involving the virtual disk. In other words, any changes to the virtual disk will then be made to differencing disk 704 at primary site 700, while at that point the recovery site has preserved the virtual disk in the state of the replicated base virtual disk 752. In one embodiment, replication management (e.g., a replication management module) at both primary site 700 and recovery site 750 will store state information indicating that the transfer of base virtual disk 702 to replicated base virtual disk 752 is the latest change set to be transferred. If the corresponding virtual machine (or other computing system) at primary site 700 fails or is otherwise unable to perform its services at that point, the hardware at recovery site 750 could start operating from the virtual storage state corresponding to replicated base virtual disk 752.
[00049] Storage copies can be requested at any time, including in connection with a schedule. For example, replication management at primary site 700 might make a request for a copy of virtual storage after some time has elapsed, at a particular time, as a result of an event occurring, etc. The present disclosure contemplates multiple types of copies that may be created and transmitted to recovery location 750, each of which may have its own schedule or other triggering criteria.
[00050] Referring briefly to Figure 8, an example of scheduling snapshots or other copies of a base or differencing disk is shown. A policy 800 may be stored in memory or storage 802, which may be storage associated with the host computing system or otherwise. In one embodiment, policy 800 includes rules for requesting copies of a differencing disk for replication to the recovery server. In the illustrated embodiment, two types of copies are considered, although more or fewer types of copies can be implemented. In this example, policy 800 includes copy instructions for crash-consistent copies 804, such as any one or more of a specific time 806 at which a copy should be taken, a fixed or variable time interval 808 between crash-consistent copies being obtained, another event trigger 810 that initiates a request for a crash-consistent copy, and the like. Similarly, policy 800 may include analogous possibilities 816, 818, 820 for application-consistent copies 814, although the specific times 816, intervals 818, and/or event triggers 820 may differ from those of the other copy type(s). A controller 824, which may include a processor and executable software, may execute policy 800. For example, the controller may execute a program(s), such as the replication management module described above, to perform timer functions 826, event monitoring 828, and/or snapshot monitoring 830 based on policy 800. In other embodiments, snapshots may be provided by other controller-executable programs, such as the Volume Shadow Copy Service (VSS) previously described.
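An illustrative rendering of such an interval-based policy in the Python sketch (the interval values are arbitrary assumptions, not values from the disclosure):

POLICY_INTERVALS_SECONDS = {
    CopyType.CRASH_CONSISTENT: 5 * 60,          # e.g., a low-impact copy every 5 minutes
    CopyType.APPLICATION_CONSISTENT: 60 * 60,   # e.g., a costlier prepared copy hourly
}

def due_copy_types(last_taken: dict, now: float) -> list:
    # Returns the copy types whose interval has elapsed, analogous to the
    # timer functions 826 executing policy 800.
    return [copy_type for copy_type, interval in POLICY_INTERVALS_SECONDS.items()
            if now - last_taken.get(copy_type, 0.0) >= interval]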
[00051] Returning now to the example in Figures 7A through 7F, Figure 7B assumes that replication management has requested a first copy type, referred to in this example as a crash-consistent copy. Such a copy can represent a copy of the virtual storage at any given time. For example, a crash-consistent copy can be made by stopping write activity to the differencing disk D1 704 at any time. In one embodiment, when a crash-consistent copy of the virtual disk is requested, the current differencing disk is closed to further write operations and a new differencing disk D2 706 is created to capture any new writes involving the virtual disk. Where differencing disk D1 704 is closed to new writes (for example, marked read-only) at an arbitrary, unprepared time, the running workload of the virtual machine (or other computing system) is not interrupted. Although this type of copy allows the workload to continue operating without interruption and at normal speed, one possible consequence is that subsequent attempts to revive the copy at a recovery location 750 could potentially fail, because the copy of differencing disk D1 704 was obtained at an arbitrary time.
[00052] As noted above, another differencing disk D2 706 is created to enable information to be written to the virtual disk once D1 704 is preserved for later transfer to recovery location 750. Differencing disk D1 704 is made available to replication management for transfer to one or more recovery servers at recovery site 750, as represented by replicated differencing disk D1 754 at recovery site 750. Updates to the virtual disk at primary site 700 from this point forward are captured on the new differencing disk D2 706.
[00053] In one embodiment, the information stored on differencing disk D1 704 is changed to read-only so that it can no longer be modified by data writes. Instead, the new differencing disk D2 706 is configured to be written to and thus log changes to the virtual disk. Replication management at primary site 700 might merge read-only differencing disk D1 704 into its parent disk, which is base virtual disk 702 in this example. An example of such a merge is shown in Figure 7C, where D1 704 has been merged into base virtual disk 702 to provide the new merged virtual disk 708.
[00054] One purpose of performing a merge function is to reduce the number of links a read operation may have to traverse in order to locate the data stored on the virtual disk. Referring now to Figure 9, an example of such linking is described. It is assumed that a copy 901 of a base virtual disk 902 has been provided to recovery servers 950, as represented by replicated base virtual disk 952. A newly created differencing disk (for example, differencing disk 904) will include an indicator 906 or link to its parent disk, which is also the previous "tree tip" disk. In this example, differencing disk 904 could include an indicator 906 to base virtual disk 902. If a read operation 908 is issued on primary servers 900 for data not found on the new differencing disk 904, the read operation 908 can obtain the data from a location further down the disk chain that is specified by the indicator 906, link, or other analogous forwarding mechanism. In this example, the read operation 908 could obtain the data from the base disk 902 based on the indicator 906 on the differencing disk 904, if the differencing disk 904 does not have the data associated with the read request.
[00055] A copy 910 of differencing disk 904 is provided to recovery servers 950, as represented by replicated differencing disk 954. When differencing disk 904 is write-protected and copied 910 to recovery servers 950, a new differencing disk 912 is created to accept changes to the virtual disk, such as through write operations 909. The new differencing disk 912 may include a link or indicator 914 to its parent, which is differencing disk 904 in this example. A read operation 908 may be issued for data that is not found on either differencing disk 912 or differencing disk 904, in which case the links or indicators 914, 906 pointing back to the base virtual disk 902 are followed to locate the addressed data.
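In the illustrative Python model above, this read path amounts to walking the parent links (indicators 914, then 906) from the tip of the chain toward the base disk:

def read_block(tip: "VirtualDisk", block: int):
    # Walk the chain via the parent indicators until the block is found.
    disk = tip
    while disk is not None:
        if block in disk.blocks:  # data found at this level of the chain
            return disk.blocks[block]
        disk = disk.parent        # follow the indicator/link to the parent disk
    return None                   # block never written anywhere in the chain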
[00056] Depending on the number of differencing disks that have been preserved in a read-only state, there could be numerous links to traverse to locate data residing on the base virtual disk 902. In order to reduce the overhead associated with such linking, differencing disks on primary servers 900 that have been preserved as read-only and transferred to recovery servers 950 can be merged with their respective parent disks. Any desired differencing disks, even all of them, that have been marked read-only or otherwise preserved for replication and transferred can be merged. As more fully described below, such merging may also be implemented at recovery site 950.
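A sketch of such a merge in the same illustrative model (the repointing of any children is noted in the comments rather than implemented):

def merge_into_parent(frozen: "VirtualDisk") -> "VirtualDisk":
    # Fold a preserved (read-only) differencing disk into its parent so reads
    # traverse one fewer link; the child's blocks are newer and take precedence.
    parent = frozen.parent
    assert frozen.read_only and parent is not None
    parent.blocks.update(frozen.blocks)
    # Any disk whose parent indicator pointed at `frozen` must be repointed
    # at `parent` by the caller, preserving the rest of the chain.
    return parent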
[00057] Returning now to the example in Figures 7A through 7F, Figure 7C assumes that replication management has requested a second type of copy of the virtual disk, referred to in this example as an application-consistent copy. Whereas the first copy type described in this example is a crash-consistent copy, which is generally a storage snapshot of the running system, an application-consistent copy in this example generally refers to a storage snapshot of a running system that has prepared itself to have the snapshot taken. Where storage is prepared in this way, the snapshot is consistent in that it provides a high probability of successful revival at recovery site 750. A copy that was not itself prepared for the snapshot to be taken (e.g., a crash-consistent copy) may not be consistent upon revival. For example, in the case of a crash-consistent copy, file system metadata, database metadata, and/or other information may not have made it to disk.
[00058] In one embodiment, a pre-prepared copy, such as an application-consistent copy, may be made in connection with a management module that informs the software on the system that a copy is to be made. For example, the Volume Shadow Copy Service (VSS) includes a process whereby running software, such as databases on the system, can optionally register for a notification that informs the software of an imminent storage copy or snapshot, which gives the software time to flush its records onto the portion of the disk image that is being preserved.
[00059] If replication management at primary site 700 makes a request for such a pre-prepared or "application-consistent" copy, then VSS or another operating system management module may be invoked to manage the snapshot set. When the snapshot is created, the differencing disk D2 706 shown in Figure 7C can be converted to read-only, and another new read/write differencing disk D3 710 can be created to capture the data that is subsequently written to the virtual disk. With differencing disk D3 710 now recording changes to the virtual disk, the previous "tree tip" differencing disk D2 706 is moved to recovery location 750, as represented by the replicated differencing disk D2 758 shown in Figure 7D.
[00060] With the transfer of the application-consistent copy of differencing disk D2 706 to recovery location 750, a merge can again occur at primary location 700. This is depicted in Figure 7D, where the merged virtual disk 708 of Figure 7C now includes D2 to form a new merged virtual disk 712. Replication management at recovery site 750 records that replicated differencing disk D2 758, just received from primary site 700, is the latest copy of the disk from primary location 700. Since the copy of D2 758 is an application-consistent copy and is also assumed to be crash-consistent, it serves both as an application-consistent copy and as a crash-consistent copy of the disk from primary location 700.
[00061] Figure 7D also illustrates the merging of disks at recovery site 750. The crash-consistent replicated copy D1 754 and the replicated base virtual disk 752 have been merged in Figure 7D to form the merged replicated virtual disk 756. As snapshots from primary site 700 arrive at recovery site 750, the snapshots can be chained and made ready to run. By merging and combining the received snapshots in this manner, if a disaster occurs in which operation at recovery site 750 is to be relied upon, potential operational latencies can be mitigated or avoided by merging selected (or even all) copies received at recovery site 750. Thus, one embodiment involves merging and combining at least some of the snapshots or other copies of the virtual disk as they arrive, or at least before the time the replicated data is called into use at recovery site 750.
[00062] Embodiments also include storing one or more of the snapshots or other copies received at recovery location 750. In order to be able to revert to a particular disk image, that disk image can be saved to enable the recovery operation from that point. In embodiments that employ multiple types of snapshots (e.g., crash-consistent copies, application-consistent copies, etc.), one or more of each snapshot type or other similar copy can be preserved to enable recovery from any one of the snapshot types. For example, crash-consistent copies may be provided to recovery location 750 on a more regular basis than application-consistent copies, which may be defined by a policy such as that described in connection with Figure 8. In one embodiment, application-consistent copies are delivered less frequently than crash-consistent copies, due to the potentially longer preparation and processing time and consequent latency involved in obtaining an application-consistent copy. In the event of a disaster or other event calling for operation at recovery site 750, the recovery servers may attempt to start operation from a crash-consistent copy or an application-consistent copy, depending on factors such as the relative elapsed time since the most recent replicated copy of each type, the urgency of re-establishing operation at recovery location 750, the extent of virtual disk modifications between the multiple snapshot types, and so on.
[00063] Figure 7E illustrates the virtual storage trees at primary location 700 and recovery location 750 in response to another crash-consistent copy being requested for transfer to recovery location 750. In this example, differencing disk D3 710 (Figure 7D) is transferred to recovery site 750, as shown by replicated differencing disk D3 760 in Figure 7E. Again, differencing disk D3 710 at primary location 700 can be merged into virtual disk 712 to create a new merged virtual disk 714, and another new read/write differencing disk D4 716 can be created to capture further changes to the virtual disk.
[00064] At recovery location 750, the newly received crash-consistent copy D3 760 is now the most recent copy (the tip of the tree). In this embodiment, the application-consistent replicated copy D2 758 and the crash-consistent replicated copy D3 760 are both available as restore points. For example, assume that a primary server at primary site 700 fails or otherwise becomes unable to properly perform its services, and that the failure occurs at a point in time generally corresponding to that depicted in Figure 7E. A recovery virtual machine (or alternatively a physical machine) at recovery site 750 can be run using, for example, the most recently received application-consistent replicated copy D2 758. Although the application-consistent copy D2 758 was received at recovery location 750 earlier in time, this is a copy type that is more likely to revive properly at recovery location 750. As noted above, this is due to this "type" of copy, which in this example involved notifying the applications/software at primary location 700 of the imminent snapshot before the respective snapshot was taken, thus enabling the software to prepare itself for the snapshot.
[00065] Thus, in one embodiment, a virtual machine or other computing system at recovery location 750 can be produced using a disk from a plurality of available differencing disks, snapshots, or other states of the replicated virtual storage. In one embodiment, a differencing disk is created as a child of the particular differencing disk from which the virtual machine runs. In the example in Figure 7F, a differencing disk 762 is created with the application-consistent replicated differencing disk D2 758 as its parent. This differencing disk 762 is then mounted, and the volumes present on the disk are reverted to the application-consistent snapshot set (for example, VSS) associated with D2 758.
[00066] As Figure 7F illustrates, while preserving a chain of differencing disks, it is possible to have multiple differencing disks pointing to the same read-only point in the tree. For example, the replicated crash-consistent differencing disk D3 760 points to the application-consistent differencing disk D2 758, as does the differencing disk 762 that was created at recovery location 750. Differencing disk D3 760 and differencing disk 762 therefore represent two different futures for the state of the read-only differencing disk D2 758. For example, a user could boot a recovery virtual machine at recovery location 750 using the virtual disk comprising differencing disk 762, the read-only differencing disk D2 758 pointed to by differencing disk 762, and the merged disk 756 pointed to by differencing disk D2 758.
[00067] Thus, in the illustrated example, the automatic or manual selection of a first virtual disk at recovery location 750 can include read-only disk 756 (including the base virtual disk and D1), the read-only application-consistent differencing disk D2 758, and the read/write crash-consistent differencing disk D3 760. Alternatively, automatic or manual selection of a second virtual disk can include read-only disk 756 (including the base virtual disk and D1), the read-only application-consistent differencing disk D2 758, and the differencing disk 762 that was created at recovery location 750. Different recovery scenarios are possible in view of the different “futures” provided by having multiple read/write differencing disks pointing to a common parent disk.
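To illustrate the two “futures”, the sketch below (hypothetical Python; Node and resolve_chain are assumed helpers, not disclosed API) builds both heads over the shared read-only parent and walks parent pointers to recover either full chain:

    class Node:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent

    merged = Node("merged_756")             # base plus D1, read-only
    d2 = Node("D2_758", parent=merged)      # shared read-only parent
    d3 = Node("D3_760", parent=d2)          # replicated crash-consistent future
    local = Node("local_762", parent=d2)    # future created at the recovery site

    def resolve_chain(head):
        # Walk parent links from the chosen head to the base, base-first.
        chain = []
        while head is not None:
            chain.append(head.name)
            head = head.parent
        return chain[::-1]

    # resolve_chain(d3)    -> ['merged_756', 'D2_758', 'D3_760']
    # resolve_chain(local) -> ['merged_756', 'D2_758', 'local_762']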
[00068] Any one of the one or more available virtual disk chains can be selected at recovery location 750. For example, a user might choose to preserve the crash-consistent disk D3 760 because the virtual machine did not have the desired data when booted using the application-consistent disk D2 758. In that case, the virtual machine can run using the crash-consistent disk D3 760. Even if a recovery virtual machine is revived using the application-consistent disk D2 758 and a new differencing disk 762 is created that points back to the application-consistent point in time, the crash-consistent disk D3 760 can be preserved as another possible revival chain.
[00069] The differencing disk 762 could alternatively be created from a different differencing disk. For example, if the latest crash-consistent copy D3 760 is to be used for recovery, then differencing disk 762 could be created with the crash-consistent copy D3 760 as its parent disk. In one embodiment, this may be accomplished by having a pointer or other link stored on differencing disk 762 that points to or otherwise identifies D3 760 as its parent. An example was depicted in Figure 9, where pointer 956 pointed to its parent disk 952. Pointer 956 might need to be changed from its state on primary servers 900 so that it points to the correct image on recovery servers 950. The decision as to whether the restore point in Figure 7F should be D2 758, D3 760, or another restore point can be made automatically based on settings, or made manually by a user.
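One way to sketch the pointer fix-up, assuming each received copy carries a textual parent locator (the reparent function and the path layout are illustrative assumptions, not the disclosed mechanism): the locator recorded at the primary site is rewritten so that it identifies the corresponding image at the recovery site.

    def reparent(received_meta, recovery_parents):
        # received_meta: e.g. {"name": "D3", "parent": "\\\\primary\\vhds\\D2.avhd"}
        # recovery_parents: maps a parent's base file name to its path at the
        # recovery site, e.g. {"D2.avhd": "\\\\recovery\\vhds\\D2.avhd"}.
        base_name = received_meta["parent"].rsplit("\\", 1)[-1]
        received_meta["parent"] = recovery_parents[base_name]
        return received_meta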
[00070] An example of the creation and representative content associated with differencing disk 762 at the recovery location is now described. In this representative embodiment, differencing disk 762 at recovery location 750 is empty at its creation. It can be configured to point to its parent disk, which is D2 758 in this example. When the virtual (or physical) machine begins operation, any information that needs to be written is written to the new differencing disk 762. For example, the new differencing disk 762 might be connected to a replicated virtual machine at recovery location 750 that has the same or similar characteristics as a primary virtual machine at primary location 700. When this replicated virtual machine is booted, it can regard its virtual disks as capable of being both written and read. The new differencing disk 762 can be written to, while information can be read from the new differencing disk 762, its parent, or earlier in the lineage, depending on where the information resides.
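These read/write semantics can be sketched directly (illustrative Python; ChainedDisk is an assumed model, not the on-disk format): writes always land on the head disk, while reads consult the head first and then walk up the lineage until the block is found.

    class ChainedDisk:
        def __init__(self, parent=None):
            self.parent = parent
            self.writes = {}  # block number -> data

        def write(self, block, data):
            # All new writes land on this (head) disk.
            self.writes[block] = data

        def read(self, block):
            # Search this disk first, then the parent lineage.
            disk = self
            while disk is not None:
                if block in disk.writes:
                    return disk.writes[block]
                disk = disk.parent
            return None  # block was never written anywhere in the chain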
[00071] In addition to serving as the read/write disk when the associated recovery server(s) is (are) running, the differencing disk 762 can also store data from before the time that the replicated storage at recovery location 750 is put into use. For example, differencing disk 762 can store data that was written to a differencing disk received from primary location 700 between the time the snapshot was taken at primary location 700 and the time the differencing disk was marked read-only.
[00072] As an example, assume that a replication management module at primary location 700 requests the running workload of a virtual machine to make an application-consistent copy of virtual storage, or another snapshot involving software that prepares itself for the snapshot. In response, the application software can try to make itself consistent for the snapshot, but it can be difficult to coordinate the snapshot that is taken of the virtual disk with the information “flushes” that are occurring in the application. When the application software appears to have finished flushing data to storage, the virtual disk snapshot is taken. Thereafter, the snapshot differencing disk is marked read-only and a new differencing disk is created. Between the time the virtual disk snapshot is taken and the time the corresponding differencing disk is marked read-only, one or more stray writes might find their way onto the differencing disk that was the subject of the snapshot. Therefore, it is possible that the differencing disk may not exactly match the snapshot differencing disk 758 that was moved to recovery location 750. In this case, even without having passed control to recovery location 750, differencing disk 758 may be mounted as a live virtual disk in order to find those stray writes and move them from differencing disk D2 758 into the differencing disk 762 created at recovery location 750, after which the task has been handled. This function of recovering the stray writes can be performed using the virtual machine that could eventually be recovered, or alternatively it could be done as part of a service that mounts the disk image and manipulates it to extract the stray writes.
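Under the simplifying assumption that each recorded write carries a timestamp, the reconciliation might look like the following sketch (move_stray_writes and the cutoff parameter are illustrative, not the disclosed mechanism): writes stamped after the snapshot cutoff are moved off the read-only image and onto the read/write child.

    def move_stray_writes(snapshot_disk, child_disk, cutoff):
        # snapshot_disk.writes / child_disk.writes: block -> (timestamp, data)
        # Writes stamped after the snapshot cutoff do not belong to the
        # snapshotted image; replay them onto the read/write child instead.
        stray = {b: td for b, td in snapshot_disk.writes.items() if td[0] > cutoff}
        for block, td in stray.items():
            child_disk.writes[block] = td
            del snapshot_disk.writes[block]
        return len(stray)  # number of stray writes relocated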
[00073] The example of Figures 7A to 7F depicts exemplary actions taken at each of the primary location and the recovery location. Figure 10 is a flowchart illustrating representative features from the perspective of at least one primary server at the primary location whose virtual storage (or other storage) is to be replicated. This example assumes that virtual storage is being replicated and that multiple types of copies of the virtual storage are available.
[00074] In this example, a base virtual disk is provided to the recovery location, as represented at block 1000. As shown at block 1002, a differencing disk, or other storage structure, is created at the primary location to record changes to the virtual disk. In this example, some number “n” of different snapshot/copy types is provided, including a type-A copy, a type-B copy, through a type-n copy. When replication management or another primary-location control module requests a copy of virtual storage, as determined at block 1004, it can specify which type of copy is desired. The identification of a copy type can be made by a user through a user interface, or configured in hardware or software as required under a policy such as that described in connection with Figure 8, or otherwise.
[00075] In this example, if replication management requested a type-A copy, as determined at block 1006, a snapshot or other copy of the differencing disk is taken without software preparing itself for the virtual storage copy to occur, as shown at block 1012. This could be, for example, a crash-consistent snapshot. If a type-B copy is requested, as determined at block 1008, a snapshot or other copy of the differencing disk is taken where at least some software prepares itself for the virtual storage copy to occur, as shown at block 1014. This could be, for example, a VSS snapshot or another application-consistent snapshot. Other copy types can be defined; thus the copy type can be determined at block 1010, and as shown at block 1016 the snapshot or other copy can be obtained in accordance with the rules for that copy type.
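The type dispatch can be sketched as a simple handler table (hypothetical Python; the handler names and the notify callback are illustrative assumptions), with each copy type bound to its own rules:

    def take_crash_consistent(disk, notify):
        # Type A: snapshot taken without software preparing itself.
        return dict(disk)

    def take_app_consistent(disk, notify):
        # Type B: at least some software flushes/prepares before the copy.
        notify()
        return dict(disk)

    HANDLERS = {
        "A": take_crash_consistent,
        "B": take_app_consistent,
        # further types, through "n", can be registered with their own rules
    }

    def take_copy(copy_type, disk, notify=lambda: None):
        return HANDLERS[copy_type](disk, notify)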
[00076] When the appropriate snapshot or other copy has been taken, it can be transferred to the recovery location, as shown at block 1018. Block 1020 illustrates that the differencing disk that was copied at the primary location is write-protected, and block 1022 shows that a new differencing disk is created to capture further changes to the virtual disk. At least for reasons of reducing storage capacity requirements and reducing the latency of reading data across the disk chain, intermediate disks can be merged with their parent disk image, as shown at block 1024.
[00077] It should be recognized that the particular order of features illustrated in Figure 10 and the other flowcharts in this disclosure is not to be interpreted as a limitation of order or sequence. The particular order of the operations depicted may, in many cases, be irrelevant, unless otherwise described as relevant. For example, the snapshot may or may not be transferred at block 1018 before the copied differencing disk is write-protected at block 1020.
[00078] Figure 11 is a flowchart illustrating representative features from the perspective of at least one recovery server at the recovery location that is replicating a virtual machine(s). This example assumes that virtual storage is being replicated and that multiple types of copies of virtual disks are provided by the primary location. As shown at block 1100, the base virtual disk received as a replica of the base virtual disk from the primary location is provided as the base of the virtual disk chain at the recovery location. When a snapshot or other copy is received from the primary location, as determined at block 1102, the type of the received copy is determined at block 1104. The resulting copy of the differencing disk at the recovery location can be identified by its copy type, such as application-consistent, crash-consistent, etc. The pointer or other link in the received copy can be modified, as shown at block 1106, so that the pointer points to its parent at the recovery location. If desired, intermediate differencing disks can be merged into their respective parent disks, as shown at block 1110. Additionally, block 1108 shows that a differencing disk can be created to point to a desired copy in order to recover the stray writes, as described in connection with Figure 7F.
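Assembled into a receive handler, these steps might be sketched as follows (illustrative Python; on_copy_received and its dictionary-based disk model are assumptions, not disclosed API):

    def on_copy_received(copy, copy_type, tree_head, merge_intermediates=False):
        # copy, tree_head: dicts with "writes" (block map) and "parent" keys.
        copy["type"] = copy_type      # e.g. "app_consistent" / "crash_consistent"
        copy["parent"] = tree_head    # re-point to the recovery-site parent
        if merge_intermediates and tree_head is not None and tree_head["parent"]:
            # Fold an old head that need not remain a restore point into its
            # parent, shortening the chain the new copy hangs from.
            tree_head["parent"]["writes"].update(tree_head["writes"])
            copy["parent"] = tree_head["parent"]
        return copy  # the new head of the recovery-site tree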
[00079] If and when a failover to the recovery server(s) occurs, as determined at block 1112, selection of a stored copy as the selected restore point can be facilitated, as shown at block 1114. For example, facilitating the selection of a stored copy might involve providing a user interface to enable an administrator or other user to select which stored (and thus not merged) copy the replicated virtual machine will use when booted and run. Other arrangements may involve automatically selecting a particular copy based on criteria. For example, the criteria might cause the virtual machine to first try to revive from an application-consistent copy and subsequently try a different copy if that revival was not successful. In an embodiment depicted at block 1116, a differencing disk is created, or an existing differencing disk is used (e.g., a differencing disk created at block 1108), to point to the selected snapshot or copy. Among other things, this differencing disk provides read/write capability for the replicated virtual disk when the replicated virtual machine is operating.
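One such automatic policy might be sketched like this (hypothetical Python; try_boot stands in for whatever actually revives the virtual machine): application-consistent restore points are attempted first, then crash-consistent ones, newest first within each type.

    def failover(restore_points, try_boot):
        # restore_points: list of (kind, taken_at, disk) tuples.
        # try_boot(disk) -> True when the revival succeeds.
        order = sorted(
            restore_points,
            key=lambda p: (p[0] != "app_consistent", -p[1]),
        )
        for kind, taken_at, disk in order:
            if try_boot(disk):
                return disk
        raise RuntimeError("no restore point could be revived")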
[00080] In one embodiment, a test can be run on the replicated virtual machine. In this case, the replicated virtual machine continues to receive changes to the virtual disk as before (for example, receiving copies of differencing disks from the primary location), while the test virtual machine runs from the created differencing disk. Thus, replication can continue while a test runs on the replicated copy of the virtual machine, without making a copy of the replicated virtual machine's virtual hard disks. This provides a way to generate a test copy of the replicated virtual machine using two sets of differencing disks that point to the same parent virtual hard disks. Writes executed by the test virtual machine are captured on one set of differencing disks, and these disks can be discarded when testing is complete. Periodic synchronization changes from differencing disks arriving from the primary server are collected on the other set of differencing disks and can be merged into the parent once testing is complete. This option is represented in Figure 11. If a test is to be run, as determined at block 1112, a differencing disk is created at block 1114 that points to the restore point to be tested, and the replicated virtual machine can be booted to run the test.
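The test arrangement reduces to two children over one read-only parent, as in this sketch (illustrative Python; the names are assumptions): the test child is thrown away afterwards, while the synchronization child is merged into the parent once testing completes.

    class Diff:
        def __init__(self, parent=None):
            self.parent = parent
            self.writes = {}  # block number -> data

    restore_point = Diff()            # read-only parent under test
    test_child = Diff(restore_point)  # captures writes from the test VM
    sync_child = Diff(restore_point)  # keeps collecting updates from primary

    def finish_test(parent, test_child, sync_child):
        # Discard everything the test wrote; keep replication's changes.
        test_child.writes.clear()
        parent.writes.update(sync_child.writes)
        sync_child.writes.clear()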
[00081] Figure 12 depicts a representative computing system 1200 in which the principles described in this document can be implemented. The computing environment described in connection with Figure 12 is described for purposes of example, as the structural and operational disclosure for replicating storage or virtual storage is applicable to any computing environment. The computing arrangement of Figure 12 can, in some embodiments, be distributed across multiple devices. Furthermore, Figure 12 can represent a server or other computing device at either the primary location or the recovery location.
[00082] The representative computing system 1200 includes a processor 1202 coupled to numerous modules via a system bus 1204. The depicted system bus 1204 represents any type of bus structure(s) that can be directly or indirectly coupled to the various components and modules of the computing environment. Among the various components are storage devices, any of which can store the subject matter for replication.
[00083] A read-only memory (ROM) 1206 may be provided to store firmware used by processor 1202. ROM 1206 represents any type of read-only memory, such as programmable ROM (PROM), erasable PROM (EPROM), or the like. The host or system bus 1204 may be coupled to a memory controller 1214, which in turn is coupled to memory 1208 via a memory bus 1216. The exemplary memory 1208 may store, for example, all or portions of a hypervisor 1210 or other virtualization software, an operating system 1218, and a module such as a replication management module (RMM) 1212 that performs at least those functions described herein. The RMM 1212 can be implemented as part of, for example, the hypervisor 1210 and/or the operating system 1218.
[00084] The memory can also store application programs 1220, other programs 1222, and data 1224. Additionally, all or part of virtual storage 1226 can be stored in memory 1208. However, due to the potential size of virtual storage disks, one embodiment involves storing virtual storage disks on storage devices rather than in memory, as represented by virtual storage 1226B associated with any one or more of the representative storage devices 1234, 1240, 1244, 1248. Virtual storage 1226A in memory 1208 may also represent any portion of the virtual storage that is temporarily buffered in memory as an intermediate step to being processed, transmitted, or stored on a storage device(s) 1234, 1240, 1244, 1248.
[00085] Figure 12 illustrates several representative storage devices on which data and/or virtual storage can be stored. For example, the system bus can be coupled to an internal storage interface 1230, which can be coupled to a drive(s) 1232 such as a hard drive. Storage media 1234 are associated with or otherwise operable with the drives. Examples of such storage include hard disks and other magnetic or optical media, flash memory and other solid-state devices, etc. The internal storage interface 1230 can utilize any type of volatile or non-volatile storage. Data, including virtual hard disks (e.g., VHD files, AVHD files, etc.), can be stored on such storage media 1234.
[00086] Similarly, an interface 1236 for removable media can also be coupled to the bus 1204. Drives 1238 can be coupled to the removable storage interface 1236 to accept and act on removable storage 1240 such as, for example, floppy disks, optical disks, memory cards, flash memory, external hard drives, etc. Virtual storage files and other data can be stored on such removable storage 1240.
[00087] In some cases, a host adapter 1242 can be provided to access external storage 1244. For example, the host adapter 1242 can interface with external storage devices through a small computer system interface (SCSI), Fibre Channel, serial advanced technology attachment (SATA) or eSATA, and/or other analogous interfaces capable of connecting to external storage 1244. Through a network interface 1246, still other remote storage can be accessible to the computing system 1200. For example, wired and wireless transceivers associated with the network interface 1246 enable communications with storage devices 1248 over one or more networks 1250. Storage devices 1248 can represent discrete storage devices, or storage associated with another computing system, server, etc. Communications with remote storage devices and systems can be accomplished through wired local area networks (LANs), wireless LANs, and/or larger networks, including global area networks (GANs) such as the Internet. Virtual storage files and other data can be stored on such external storage devices 1244, 1248.
[00088] As described in this document, the primary and recovery servers communicate information such as snapshots or other copies. Communications between servers can occur via direct wiring, peer-to-peer networks, networks based on local infrastructure (e.g., wired and/or wireless local area networks), off-site networks such as metropolitan area networks and other wide area networks, global area networks, etc. A transmitter 1252 and a receiver 1254 are depicted in Figure 12 to represent the computing device's structural ability to transmit and/or receive data by any of these or other communication methodologies. The transmitter 1252 and/or receiver 1254 devices can be stand-alone components, can be integrated as a transceiver(s), or can be integrated into or an existing part of other communication devices such as the network interface 1246. Where computing system 1200 represents a server or other computing device at the primary location, all or part of the virtual disk or other stored data to be replicated may be transmitted via the transmitter 1252, whether it is a standalone device, integrated with a receiver 1254, integrated with the network interface 1246, etc. Similarly, where computing system 1200 represents a server or other computing device at the recovery location, all or part of the virtual disk or other stored data to be replicated may be received via the receiver 1254, whether it is a standalone device, integrated with a transmitter 1252, integrated with the network interface 1246, etc. As computing system 1200 can represent a server(s) at either the primary or the recovery location, block 1256 represents the primary or recovery server(s) in communication with the computing system 1200 that represents the other of the primary or recovery server(s).
[00089] As demonstrated in the previous examples, the embodiments described in this document facilitate disaster recovery and other replication features. In various embodiments, methods are described that can be performed on a computing device, such as by providing software modules that are executable by a processor (which includes a physical and/or logical processor, controller, etc.). The methods may also be stored on computer-readable media that can be accessed and read by the processor and/or by circuitry that prepares the information for processing by the processor. Having instructions stored on a computer-readable medium as described in this document is distinguishable from having instructions propagated or transmitted, as propagation transfers instructions rather than storing them, as occurs with a computer-readable medium having instructions stored on it. Therefore, unless otherwise noted, references to computer-readable media/medium having instructions stored thereon, in this or an analogous form, are references to tangible media on which data may be stored or retained.
[00090] Although the subject matter has been described in language specific to structural features and/or methodological acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as representative ways of implementing the claims.
Claims (7)
1. Apparatus for managing replicated virtual storage at recovery sites, characterized in that it comprises: replicated virtual storage of a replicated virtual machine (600), including at least one replicated base virtual disk (752, 756) corresponding to a primary base virtual disk (702) to be replicated; a receiver configured to receive a plurality of copies of differencing disks (704, 706, 710) of a plurality of copy types, each differencing disk (704, 706, 710) linked in a tree structure to a respective parent disk that was created before the differencing disk (704, 706, 710), the respective parent disk being a parent differencing disk or the primary base virtual disk (702), where a first of the plurality of copy types comprises a crash-consistent copy, which is a general snapshot of the storage of a running system, and a second of the plurality of copy types comprises an application-consistent copy, which is a snapshot of the storage of a running system created in response to notifying software on the system that a copy is to be made; and a replication management module configured to arrange the received copies of the differencing disks (704, 706, 710) of the plurality of copy types relative to the replicated base virtual disk (752, 756) as the differencing disks (704, 706, 710) were arranged relative to the primary base virtual disk (702).
2. Apparatus according to claim 1, characterized in that the replication management module is further configured to store one or more of the received copies of the differencing disks (704, 706, 710) as potential restore points from which to initiate operation of the replicated virtual machine.
3. Apparatus according to claim 2, characterized in that the replication management module is further configured to create a read/write differencing disk as a child disk of one of the stored copies of the differencing disks (704, 706, 710) to store changes for the replicated virtual machine when booted.
4. Apparatus according to claim 2, characterized in that the replication management module is further configured to facilitate startup of the replicated virtual machine from a selected one of the potential restore points and one or more stored copies of the differencing disks (704, 706, 710) that sequentially follow the selected restore point.
5. Machine-implemented method for managing replicated virtual storage at recovery sites on replicated virtual storage of a replicated virtual machine, including at least one replicated base virtual disk (752, 756) corresponding to a primary base virtual disk (702) to be replicated, the method characterized in that it comprises the steps of: receiving a plurality of copies of differencing disks (704, 706, 710) of a plurality of copy types, each differencing disk (704, 706, 710) linked to a respective parent disk that was created before the differencing disk (704, 706, 710), the respective parent disk being a parent differencing disk or the primary base virtual disk (702), where a first of the plurality of copy types comprises a crash-consistent copy, which is a general snapshot of the storage of a running system, and a second of the plurality of copy types comprises an application-consistent copy, which is a snapshot of the storage of a running system created in response to notifying software on the system that a copy is to be made; and arranging, in a replication management module, the received copies of the differencing disks (704, 706, 710) of the plurality of copy types relative to the replicated base virtual disk (752, 756) as the differencing disks (704, 706, 710) were arranged relative to the primary base virtual disk (702).
6. Machine-implemented method according to claim 5, characterized in that it further comprises the step of obtaining flushed data from one or more notified applications in response to the notification of the software.
7. Computer-readable medium for managing replicated virtual storage at recovery sites, characterized in that it has stored thereon instructions that are executable by a computer system to perform the method steps as defined in any one of claims 5 to 6.
Similar technologies:
Publication number | Publication date | Patent title
BR112013032923B1|2021-08-24|APPARATUS, COMPUTER IMPLEMENTED METHOD AND COMPUTER-READABLE MEDIA TO MANAGE REPLICATED VIRTUAL STORAGE IN RECOVERY SITES
US11086555B1|2021-08-10|Synchronously replicating datasets
US10992598B2|2021-04-27|Synchronously replicating when a mediation service becomes unavailable
US9400611B1|2016-07-26|Data migration in cluster environment using host copy and changed block tracking
US9460028B1|2016-10-04|Non-disruptive and minimally disruptive data migration in active-active clusters
EP3032396A1|2016-06-15|OSSITL OpenStack swift auditing for tape library
US11003364B2|2021-05-11|Write-once read-many compliant data storage cluster
US20210360066A1|2021-11-18|Utilizing Cloud-Based Storage Systems To Support Synchronous Replication Of A Dataset
WO2017014814A1|2017-01-26|Replicating memory volumes
US20210303164A1|2021-09-30|Managing host mappings for replication endpoints
US20220030062A1|2022-01-27|Replication handling among distinct networks
NZ714756B2|2017-01-05|Managing replicated virtual storage at recovery sites
NZ619304B2|2016-03-30|Managing replicated virtual storage at recovery sites
Mukherjee2015|Benchmarking Hadoop performance on different distributed storage systems
Patent family:
Publication number | Publication date
CN103608786A|2014-02-26|
CO6852032A2|2014-01-30|
NZ714756A|2016-09-30|
KR101983405B1|2019-05-29|
EP2721498A2|2014-04-23|
US9785523B2|2017-10-10|
MX2013015361A|2014-02-11|
US20120324183A1|2012-12-20|
ES2600914T3|2017-02-13|
ZA201308547B|2015-02-25|
NZ619304A|2015-12-24|
KR20140035417A|2014-03-21|
MY173030A|2019-12-19|
IL230035A|2017-05-29|
JP2014520344A|2014-08-21|
WO2012177445A3|2013-03-21|
CA2839014A1|2012-12-27|
CN103608786B|2017-01-18|
RU2619894C2|2017-05-19|
RU2013156675A|2015-06-27|
AU2012273366B2|2016-12-08|
EP2721498B1|2016-08-03|
EP2721498A4|2015-03-25|
WO2012177445A2|2012-12-27|
CA2839014C|2019-01-15|
BR112013032923A2|2017-01-24|
CL2013003615A1|2014-08-08|
MX348328B|2017-06-07|
JP6050342B2|2016-12-21|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

US6795966B1|1998-05-15|2004-09-21|Vmware, Inc.|Mechanism for restoring, porting, replicating and checkpointing computer systems using state extraction|
US7143307B1|2002-03-15|2006-11-28|Network Appliance, Inc.|Remote disaster recovery and data migration using virtual appliance migration|
US7093086B1|2002-03-28|2006-08-15|Veritas Operating Corporation|Disaster recovery and backup using virtual machines|
US7840963B2|2004-10-15|2010-11-23|Microsoft Corporation|Marking and utilizing portions of memory state information during a switch between virtual machines to minimize software service interruption|
US7899788B2|2005-04-01|2011-03-01|Microsoft Corporation|Using a data protection server to backup and restore data on virtual servers|
US20070208918A1|2006-03-01|2007-09-06|Kenneth Harbin|Method and apparatus for providing virtual machine backup|
US8321377B2|2006-04-17|2012-11-27|Microsoft Corporation|Creating host-level application-consistent backups of virtual machines|
US7613750B2|2006-05-29|2009-11-03|Microsoft Corporation|Creating frequent application-consistent backups efficiently|
US8407518B2|2007-10-26|2013-03-26|Vmware, Inc.|Using virtual machine cloning to create a backup virtual machine in a fault tolerant system|
US8364643B2|2007-12-04|2013-01-29|Red Hat Israel, Ltd.|Method and system thereof for restoring virtual desktops|
JP2009146169A|2007-12-14|2009-07-02|Fujitsu Ltd|Storage system, storage device, and data backup method|
US8117410B2|2008-08-25|2012-02-14|Vmware, Inc.|Tracking block-level changes using snapshots|
US8499297B2|2008-10-28|2013-07-30|Vmware, Inc.|Low overhead fault tolerance through hybrid checkpointing and replay|
US8954645B2|2011-01-25|2015-02-10|International Business Machines Corporation|Storage writes in a mirrored virtual machine system|JP4863605B2|2004-04-09|2012-01-25|株式会社日立製作所|Storage control system and method|
US9037901B2|2011-08-19|2015-05-19|International Business Machines Corporation|Data set autorecovery|
US9767274B2|2011-11-22|2017-09-19|Bromium, Inc.|Approaches for efficient physical to virtual disk conversion|
US9372910B2|2012-01-04|2016-06-21|International Business Machines Corporation|Managing remote data replication|
US8990815B1|2012-02-01|2015-03-24|Symantec Corporation|Synchronizing allocated blocks of virtual disk files across primary and secondary volumes by excluding unused blocks|
US8966318B1|2012-04-27|2015-02-24|Symantec Corporation|Method to validate availability of applications within a backup image|
US8850146B1|2012-07-27|2014-09-30|Symantec Corporation|Backup of a virtual machine configured to perform I/O operations bypassing a hypervisor|
US10248619B1|2012-09-28|2019-04-02|EMC IP Holding Company LLC|Restoring a virtual machine from a copy of a datastore|
US9104331B2|2012-09-28|2015-08-11|Emc Corporation|System and method for incremental virtual machine backup using storage system functionality|
US9110604B2|2012-09-28|2015-08-18|Emc Corporation|System and method for full virtual machine backup using storage system functionality|
US9286086B2|2012-12-21|2016-03-15|Commvault Systems, Inc.|Archiving virtual machines in a data storage system|
US20140181044A1|2012-12-21|2014-06-26|Commvault Systems, Inc.|Systems and methods to identify uncharacterized and unprotected virtual machines|
US10162873B2|2012-12-21|2018-12-25|Red Hat, Inc.|Synchronization of physical disks|
US9703584B2|2013-01-08|2017-07-11|Commvault Systems, Inc.|Virtual server agent load balancing|
US9495404B2|2013-01-11|2016-11-15|Commvault Systems, Inc.|Systems and methods to process block-level backup for selective file restoration for virtual machines|
US9286110B2|2013-01-14|2016-03-15|Commvault Systems, Inc.|Seamless virtual machine recall in a data storage system|
KR101544899B1|2013-02-14|2015-08-17|주식회사 케이티|Backup system and backup method in virtualization environment|
US9430255B1|2013-03-15|2016-08-30|Google Inc.|Updating virtual machine generated metadata to a distribution service for sharing and backup|
US9842053B2|2013-03-15|2017-12-12|Sandisk Technologies Llc|Systems and methods for persistent cache logging|
US9582297B2|2013-05-16|2017-02-28|Vmware, Inc.|Policy-based data placement in a virtualized computing environment|
US9424056B1|2013-06-28|2016-08-23|Emc Corporation|Cross site recovery of a VM|
US9329931B2|2013-07-24|2016-05-03|Seagate Technology Llc|Solid state drive emergency pre-boot application providing expanded data recovery function|
US9858154B1|2013-08-23|2018-01-02|Acronis International Gmbh|Agentless file backup of a virtual machine|
US20150067678A1|2013-08-27|2015-03-05|Connectloud, Inc.|Method and apparatus for isolating virtual machine instances in the real time event stream from a tenant data center|
US20150066860A1|2013-08-27|2015-03-05|Connectloud, Inc.|Method and apparatus for utilizing virtual machine instance information from a database for software defined cloud recovery|
US20150067679A1|2013-08-28|2015-03-05|Connectloud, Inc.|Method and apparatus for software defined cloud workflow recovery|
US20150074536A1|2013-09-12|2015-03-12|Commvault Systems, Inc.|File manager integration with virtualization in an information management system, including user control and storage management of virtual machines|
US10042579B1|2013-09-24|2018-08-07|EMC IP Holding Company LLC|Crash consistent snapshot|
US9882980B2|2013-10-22|2018-01-30|International Business Machines Corporation|Managing continuous priority workload availability and general workload availability between sites at unlimited distances for products and services|
US9465855B2|2013-10-22|2016-10-11|International Business Machines Corporation|Maintaining two-site configuration for workload availability between sites at unlimited distances for products and services|
US9389970B2|2013-11-01|2016-07-12|International Business Machines Corporation|Selected virtual machine replication and virtual machine restart techniques|
US9304871B2|2013-12-02|2016-04-05|International Business Machines Corporation|Flash copy for disaster recoverytesting|
US9286366B2|2013-12-02|2016-03-15|International Business Machines Corporation|Time-delayed replication for data archives|
US9262290B2|2013-12-02|2016-02-16|International Business Machines Corporation|Flash copy for disaster recoverytesting|
US20150154042A1|2013-12-04|2015-06-04|Hitachi, Ltd.|Computer system and control method for virtual machine|
US9436489B2|2013-12-20|2016-09-06|Red Hat Israel, Ltd.|Virtual machine data replication with shared resources|
US10146634B1|2014-03-31|2018-12-04|EMC IP Holding Company LLC|Image restore from incremental backup|
US9606873B2|2014-05-13|2017-03-28|International Business Machines Corporation|Apparatus, system and method for temporary copy policy|
US9619342B2|2014-06-24|2017-04-11|International Business Machines Corporation|Back up and recovery in virtual machine environments|
CN105446826A|2014-06-30|2016-03-30|国际商业机器公司|Virtual machine backup and recovery method and device|
US20160019317A1|2014-07-16|2016-01-21|Commvault Systems, Inc.|Volume or virtual machine level backup and generating placeholders for virtual machine files|
US9639340B2|2014-07-24|2017-05-02|Google Inc.|System and method of loading virtual machines|
US10140303B1|2014-08-22|2018-11-27|Nexgen Storage, Inc.|Application aware snapshots|
US10296320B2|2014-09-10|2019-05-21|International Business Machines Corporation|Patching systems and applications in a virtualized environment|
US9710465B2|2014-09-22|2017-07-18|Commvault Systems, Inc.|Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations|
US9417968B2|2014-09-22|2016-08-16|Commvault Systems, Inc.|Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations|
US9436555B2|2014-09-22|2016-09-06|Commvault Systems, Inc.|Efficient live-mount of a backed up virtual machine in a storage management system|
US10776209B2|2014-11-10|2020-09-15|Commvault Systems, Inc.|Cross-platform virtual machine backup and replication|
US9983936B2|2014-11-20|2018-05-29|Commvault Systems, Inc.|Virtual machine change block tracking|
US9928092B1|2014-11-25|2018-03-27|Scale Computing|Resource management in a virtual machine cluster|
US10120594B1|2014-11-25|2018-11-06|Scale Computing Inc|Remote access latency in a reliable distributed computing system|
JP6299640B2|2015-03-23|2018-03-28|横河電機株式会社|Communication device|
US10645164B1|2015-10-27|2020-05-05|Pavilion Data Systems, Inc.|Consistent latency for solid state drives|
US10719305B2|2016-02-12|2020-07-21|Nutanix, Inc.|Virtualized file server tiers|
JP6458752B2|2016-03-04|2019-01-30|日本電気株式会社|Storage control device, storage system, storage control method, and program|
US10592350B2|2016-03-09|2020-03-17|Commvault Systems, Inc.|Virtual server cloud file system for virtual machine restore to cloud operations|
US11218418B2|2016-05-20|2022-01-04|Nutanix, Inc.|Scalable leadership election in a multi-processing computing environment|
CN106445730B|2016-07-22|2019-12-03|平安科技(深圳)有限公司|A kind of method and terminal improving virtual machine performance|
US10521453B1|2016-09-07|2019-12-31|United Services Automobile Association |Selective DNS synchronization|
CN106569872A|2016-09-28|2017-04-19|浪潮电子信息产业股份有限公司|Method for shortening virtual machine snapshot chain|
US10747630B2|2016-09-30|2020-08-18|Commvault Systems, Inc.|Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node|
US10162528B2|2016-10-25|2018-12-25|Commvault Systems, Inc.|Targeted snapshot based on virtual machine location|
US10152251B2|2016-10-25|2018-12-11|Commvault Systems, Inc.|Targeted backup of virtual machine|
US10678758B2|2016-11-21|2020-06-09|Commvault Systems, Inc.|Cross-platform virtual machine data and memory backup and replication|
US10728090B2|2016-12-02|2020-07-28|Nutanix, Inc.|Configuring network segmentation for a virtualization environment|
US10824455B2|2016-12-02|2020-11-03|Nutanix, Inc.|Virtualized server systems and methods including load balancing for virtualized file servers|
US11048595B2|2016-12-05|2021-06-29|Nutanix, Inc.|Disaster recovery for distributed file servers, including metadata fixers|
US10318166B1|2016-12-28|2019-06-11|EMC IP Holding Company LLC|Preserving locality of storage accesses by virtual machine copies in hyper-converged infrastructure appliances|
US20180276085A1|2017-03-24|2018-09-27|Commvault Systems, Inc.|Virtual machine recovery point generation|
US10387073B2|2017-03-29|2019-08-20|Commvault Systems, Inc.|External dynamic virtual machine synchronization|
CN108733509B|2017-04-17|2021-12-10|伊姆西Ip控股有限责任公司|Method and system for backing up and restoring data in cluster system|
US10581897B1|2017-07-26|2020-03-03|EMC IP Holding Company LLC|Method and system for implementing threat intelligence as a service|
US10469518B1|2017-07-26|2019-11-05|EMC IP Holding Company LLC|Method and system for implementing cyber security as a service|
WO2019112955A1|2017-12-08|2019-06-13|Rubrik, Inc.|Sharding of full and incremental snapshots|
US10621046B2|2017-12-08|2020-04-14|Rubrik, Inc.|Blobstore system for the management of large data objects|
US11132331B2|2017-12-12|2021-09-28|Rubrik, Inc.|Sharding of full and incremental snapshots|
US10949306B2|2018-01-17|2021-03-16|Arista Networks, Inc.|System and method of a cloud service provider virtual machine recovery|
US10789135B2|2018-02-07|2020-09-29|Microsoft Technology Licensing, Llc|Protection of infrastructure-as-a-service workloads in public cloud|
US10877928B2|2018-03-07|2020-12-29|Commvault Systems, Inc.|Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations|
US11086826B2|2018-04-30|2021-08-10|Nutanix, Inc.|Virtualized server systems and methods including domain joining techniques|
US11126504B2|2018-07-10|2021-09-21|EMC IP Holding Company LLC|System and method for dynamic configuration of backup agents|
US11194680B2|2018-07-20|2021-12-07|Nutanix, Inc.|Two node clusters recovery on a failure|
US10969986B2|2018-11-02|2021-04-06|EMC IP Holding Company LLC|Data storage system with storage container pairing for remote replication|
US10809935B2|2018-12-17|2020-10-20|Vmware, Inc.|System and method for migrating tree structures with virtual disks between computing environments|
US10768971B2|2019-01-30|2020-09-08|Commvault Systems, Inc.|Cross-hypervisor live mount of backed up virtual machine data|
US10996974B2|2019-01-30|2021-05-04|Commvault Systems, Inc.|Cross-hypervisor live mount of backed up virtual machine data, including management of cache storage for virtual machine data|
Legal status:
2017-11-07| B25A| Requested transfer of rights approved|Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC (US) |
2018-12-11| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-10-29| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-03-16| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2021-06-15| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-08-24| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 13/06/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US13/163,760|US9785523B2|2011-06-20|2011-06-20|Managing replicated virtual storage at recovery sites|
US13/163,760|2011-06-20|
PCT/US2012/042107|WO2012177445A2|2011-06-20|2012-06-13|Managing replicated virtual storage at recovery sites|